Next-gen and Shader Model 3
With the Shader Model 3 transition slowly making progress, hardcore gamers are looking to what comes next for graphics technology. David's answer is that the future is looking good.
"We're going to see the next generation of shader-based games. At the first generation, we saw people using a shader to emulate the hardware pipeline, and finding 'Hey - this really
is programmable'. After that, they tried to do a few things with more lights, using perhaps eight instead of ten. Then they started to write material shaders, and they made great cloth and metal effects that we saw.
People are now starting to change the lighting model, and are exploring the things that they can do with that. We're seeing High Dynamic Range lighting being added to engines, like Far Cry. However, the next step is going to be games designed from the ground up for HDR - these engines are going to deliver the next level of visual quality."
Of course, the prime example of this is the Unreal Engine 3, which is designed for Shader Model 3 and HDR from the start. However, with developers like Valve adding HDR to existing engines using Shader Model 2, how does David feel those implementations rate?
"It seems like an obvious thing to say, but the quality of HDR done in Shader Model 2 is less than in 3. The human visual system has quite a bit of dynamic range. When you build a picture through components, as graphics engines do, you get round-off errors as you create each object. Using Partial precision in SM2 exacerbates that problem and is less pleasing to the eye.
"Clearly, having HDR bolted on to an engine using SM2 is better than having none, but it's not as good as a HDR implementation that's been in an engine from the beginning.
"As an example, the rendering of
Nalu requires many passes for each frame. When you draw stuff a bunch of times in a bunch of different ways in SM2, you burn precision in each pass. In SM3, with longer shader programmes and better precision, the result is quicker and more accurate."
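To see the effect David describes, here's a minimal sketch - our illustration, not his - that sums the same per-pass contributions in reduced and in full precision. NumPy's float16 and float32 merely stand in for SM2 partial precision and SM3 full precision, and the number of passes and the contribution values are arbitrary assumptions.

```python
# Toy illustration (not from the interview): accumulating a frame value over
# many passes in reduced precision, the way a multi-pass renderer "burns
# precision" in each pass, versus doing the same sums in higher precision.
# float16 is only a stand-in for "partial precision" - real SM2 partial
# precision is a GPU pixel-shader format, not NumPy's.
import numpy as np

PASSES = 32                                           # lights / layers composited per frame
contributions = np.random.default_rng(0).uniform(0.0, 0.05, size=PASSES)

def accumulate(dtype):
    frame = np.zeros(1, dtype=dtype)
    for c in contributions:
        frame += dtype(c)                             # each pass rounds to the buffer's precision
    return float(frame[0])

reference = float(np.sum(contributions.astype(np.float64)))
low  = accumulate(np.float16)                         # "partial precision" stand-in
high = accumulate(np.float32)                         # "full precision" stand-in

print(f"reference  : {reference:.6f}")
print(f"float16 sum: {low:.6f}  (error {abs(low - reference):.6f})")
print(f"float32 sum: {high:.6f}  (error {abs(high - reference):.6f})")
```

Run it and the float16 accumulation drifts visibly from the reference while the float32 result stays close - the round-off-per-pass problem in miniature.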
Using AA with HDR
Those of you with super-duper graphics cards will have come across a problem: you can't use anti-aliasing at the same time as HDR lighting, in Far Cry for example. You have to choose one or the other. Why is this, and when is the problem going to get solved?
"OK, so the problem is this. With a conventional rendering pipeline, you render straight into the final buffer - so the whole scene is rendered straight into the frame buffer and you can apply the AA to the scene right there."
"But with HDR, you render individual components from a scene and then composite them into a final buffer. It's more like the way films work, where objects on the screen are rendered separately and then composited together. Because they're rendered separately, it's hard to apply FSAA (
note the full-screen prefix, not composited-image AA! -Ed) So traditional AA doesn't make sense here."
So if it can't be done with existing hardware, why not add a new feature to the graphics card that handles both?
"It would be expensive for us to try and do it in hardware, and it wouldn't really make sense - it doesn't make sense, going into the future, for us to keep applying AA at the hardware level. What will happen is that as games are created for HDR, AA will be done in-engine according to the specification of the developer.
"Maybe at some point,
that process will be accelerated in hardware, but that's not in the immediate future."
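To make the contrast concrete, here's a minimal NumPy sketch of the two paths described above - our illustration, not NVIDIA's pipeline. The resolution, the choice of components (diffuse, specular, bloom), the supersample-and-downsample stand-in for FSAA and the Reinhard-style tone map are all assumptions, not details from the interview.

```python
# Toy sketch: in the conventional path the whole scene lands in one buffer,
# so full-screen AA (here a naive supersample-and-downsample) can be applied
# directly to it. In the HDR path several floating-point components are
# composited first, so any AA has to be applied by the engine after (or
# during) compositing - there is no single hardware buffer to resolve.
import numpy as np

W, H, SS = 64, 64, 2                              # output size, supersample factor

def downsample(img, factor):
    """Box-filter downsample - a stand-in for a hardware AA resolve."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Conventional pipeline: one LDR frame buffer, AA applied right there.
scene_ldr = np.random.default_rng(1).random((H * SS, W * SS)).astype(np.float32)
final_conventional = downsample(scene_ldr, SS)    # "FSAA" on the one buffer

# HDR pipeline: components rendered separately, composited, then tone-mapped.
rng = np.random.default_rng(2)
diffuse  = rng.random((H * SS, W * SS)).astype(np.float32)
specular = rng.random((H * SS, W * SS)).astype(np.float32) * 4.0   # can exceed 1.0
bloom    = rng.random((H * SS, W * SS)).astype(np.float32) * 0.5

hdr = diffuse + specular + bloom                  # composite into an HDR buffer
tone_mapped = hdr / (1.0 + hdr)                   # simple Reinhard-style tone map
final_hdr = downsample(tone_mapped, SS)           # "in-engine" AA, after compositing

print(final_conventional.shape, final_hdr.shape)  # both (64, 64)
```

The point is where the AA step sits: bolted onto a single frame buffer in the first path, and owned by the engine at the end of the compositing chain in the second.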
But if the problem is the size of the frame buffer, wouldn't the new range of 512MB cards help with this?
"With more frame buffer size, yes, you could possibly get closer. But you're talking more like 2GB than 512MB."